
    Hardness Amplification of Optimization Problems

    In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification of problems in various classes, such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication, and even problems in TFNP such as Factoring and computing a Nash equilibrium.
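
    To make direct product feasibility concrete, here is a minimal Python sketch (the encoding and function names are ours, not the paper's) for Max-SAT, which is direct product feasible via disjoint union: k formulas on disjoint variable sets are concatenated, and an optimal assignment to the aggregate restricts to an optimal assignment of each part because the objective decomposes.

        # Hedged sketch: Max-SAT is direct product feasible via disjoint union.
        # A formula is (clauses, num_vars); a clause is a list of signed
        # variable indices (positive literal = v, negated literal = -v), v >= 1.

        def aggregate(instances):
            """Form one large Max-SAT instance from k instances by shifting
            variable indices so the instances use disjoint variable sets."""
            big_clauses, offsets, shift = [], [], 0
            for clauses, num_vars in instances:
                offsets.append(shift)
                for clause in clauses:
                    big_clauses.append([lit + shift if lit > 0 else lit - shift
                                        for lit in clause])
                shift += num_vars
            return (big_clauses, shift), offsets

        def split_solution(assignment, instances, offsets):
            """Project an optimal assignment (dict: variable -> bool) of the
            aggregate back onto the k original instances; each projection is
            optimal since the parts share no variables."""
            return [{v: assignment[v + off] for v in range(1, num_vars + 1)}
                    for (clauses, num_vars), off in zip(instances, offsets)]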

    An Efficient Representation for Filtrations of Simplicial Complexes

    A filtration over a simplicial complex K is an ordering of the simplices of K such that all prefixes in the ordering are subcomplexes of K. Filtrations are at the core of Persistent Homology, a major tool in Topological Data Analysis. In order to represent the filtration of a simplicial complex, the entire filtration can be appended to any data structure that explicitly stores all the simplices of the complex, such as the Hasse diagram or the recently introduced Simplex Tree [Algorithmica '14]. However, with the popularity of various computational methods that need to handle simplicial complexes, and with the rapidly increasing size of the complexes, the task of finding a compact data structure that can still support efficient queries is of great interest. In this paper, we propose a new data structure called the Critical Simplex Diagram (CSD), which is a variant of the Simplex Array List (SAL) [Algorithmica '17]. Our data structure allows one to store in a compact way the filtration of a simplicial complex, and allows for the efficient implementation of a large range of basic operations. Moreover, we prove that our data structure is essentially optimal with respect to the requisite storage space. Finally, we show that the CSD representation admits fast construction algorithms for Flag complexes and relaxed Delaunay complexes. Comment: A preliminary version appeared in SODA 2017.
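
    As a minimal sketch of the definition above (not of the CSD itself), the following Python check verifies that an ordering of simplices is a filtration: every proper face of each simplex must occur strictly earlier, so that every prefix is a subcomplex.

        from itertools import combinations

        def is_filtration(order):
            """order: list of simplices, each a frozenset of vertex labels.
            Valid iff every proper face of each simplex appears earlier."""
            seen = set()
            for simplex in order:
                for k in range(1, len(simplex)):
                    for face in combinations(sorted(simplex), k):
                        if frozenset(face) not in seen:
                            return False
                seen.add(simplex)
            return True

        # A valid filtration of the full triangle on vertices {1, 2, 3}:
        triangle = [frozenset(s) for s in
                    ([1], [2], [1, 2], [3], [1, 3], [2, 3], [1, 2, 3])]
        assert is_filtration(triangle)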

    Building Efficient and Compact Data Structures for Simplicial Complexes

    The Simplex Tree (ST) is a recently introduced data structure that can represent abstract simplicial complexes of any dimension and allows efficient implementation of a large range of basic operations on simplicial complexes. In this paper, we show how to optimally compress the Simplex Tree while retaining its functionalities. In addition, we propose two new data structures called the Maximal Simplex Tree (MxST) and the Simplex Array List (SAL). We analyze the compressed Simplex Tree, the Maximal Simplex Tree, and the Simplex Array List under various settings. Comment: An extended abstract appeared in the proceedings of SoCG 2015.
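
    A minimal sketch of the uncompressed trie idea behind the Simplex Tree, assuming its standard presentation: each simplex, written with sorted vertex labels, is stored as a root-to-node path, so a membership query costs one dictionary lookup per vertex. The compressed ST, MxST, and SAL are not reproduced here.

        from itertools import combinations

        class SimplexTree:
            """Toy trie over sorted vertex labels (a sketch of the basic ST
            idea only; none of the paper's compression is implemented)."""

            def __init__(self):
                self.root = {}

            def insert(self, simplex):
                """Insert a simplex together with all of its faces."""
                verts = sorted(simplex)
                for k in range(1, len(verts) + 1):
                    for face in combinations(verts, k):
                        node = self.root
                        for v in face:
                            node = node.setdefault(v, {})

            def contains(self, simplex):
                node = self.root
                for v in sorted(simplex):
                    if v not in node:
                        return False
                    node = node[v]
                return True

        st = SimplexTree()
        st.insert({1, 2, 3})
        assert st.contains({1, 3}) and not st.contains({1, 4})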

    On the Sensitivity Conjecture for Disjunctive Normal Forms

    The sensitivity conjecture of Nisan and Szegedy [CC '94] asks whether for any Boolean function f, the maximum sensitivity s(f) is polynomially related to its block sensitivity bs(f), and hence to other major complexity measures. Despite major advances in the analysis of Boolean functions over the last decade, the problem remains widely open. In this paper, we consider a restriction on the class of Boolean functions through a model of computation (DNF), and refer to the functions adhering to this restriction as admitting the Normalized Block property. We prove that for any function f admitting the Normalized Block property, bs(f) <= 4 * s(f)^2. We note that (almost) all the functions mentioned in the literature that achieve a quadratic separation between sensitivity and block sensitivity admit the Normalized Block property. Recently, Gopalan et al. [ITCS '16] showed that every Boolean function f is uniquely specified by its values on a Hamming ball of radius at most 2 * s(f). We extend this result and also construct examples of Boolean functions which provide the matching lower bounds.
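
    For readers who want the two measures concretely, here is a hedged brute-force Python computation of s(f) and bs(f) from a truth table (exponential time, illustration only); the helper names and the example are ours, not from the paper.

        def sensitivity(f, n):
            """s(f): max over inputs x (as n-bit ints) of the number of
            single-bit flips that change f's value."""
            return max(sum(f(x) != f(x ^ (1 << i)) for i in range(n))
                       for x in range(2 ** n))

        def block_sensitivity(f, n):
            """bs(f): max over x of the largest family of pairwise disjoint
            blocks (bitmasks) whose flips each change f's value."""
            def max_packing(blocks, used):
                return max([1 + max_packing(blocks[i + 1:], used | b)
                            for i, b in enumerate(blocks) if not (b & used)],
                           default=0)
            return max(max_packing([b for b in range(1, 2 ** n)
                                    if f(x) != f(x ^ b)], 0)
                       for x in range(2 ** n))

        maj3 = lambda x: bin(x).count("1") >= 2  # majority of 3 bits
        s, bs = sensitivity(maj3, 3), block_sensitivity(maj3, 3)
        assert bs <= 4 * s ** 2                  # the paper's bound (here 2 <= 16)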

    Ham Sandwich is Equivalent to Borsuk-Ulam

    The Borsuk-Ulam theorem is a fundamental result in algebraic topology, with applications to various areas of Mathematics. A classical application of the Borsuk-Ulam theorem is the Ham Sandwich theorem: The volumes of any n compact sets in R^n can always be simultaneously bisected by an (n-1)-dimensional hyperplane. In this paper, we demonstrate the equivalence between the Borsuk-Ulam theorem and the Ham Sandwich theorem. The main technical result we show towards establishing the equivalence is the following: For every odd polynomial restricted to the hypersphere f:S^n->R, there exists a compact set A in R^{n+1} such that for every x in S^n we have f(x)=vol(A cap H^+) - vol(A cap H^-), where H is the oriented hyperplane containing the origin with x as the normal. A noteworthy aspect of the proof of the above result is the use of hyperspherical harmonics. Finally, using the above result we prove that there exist constants n_0, epsilon_0>0 such that for every n >= n_0 and epsilon <= epsilon_0/sqrt{48n}, any query algorithm to find an epsilon-bisecting (n-1)-dimensional hyperplane of n compact sets in [-n^{4.51}, n^{4.51}]^n, even with success probability 2^{-Omega(n)}, requires 2^{Omega(n)} queries.
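
    A hedged numerical illustration of the displayed identity (the set A and all parameters are ours): take A to be a unit ball in R^3 shifted off the origin and estimate f(x) = vol(A cap H^+) - vol(A cap H^-) by Monte Carlo; the estimates visibly satisfy the oddness f(-x) = -f(x) that the equivalence exploits.

        import numpy as np

        rng = np.random.default_rng(0)

        def signed_volume_difference(x, samples=200_000):
            """Monte Carlo estimate of vol(A cap H^+) - vol(A cap H^-) for
            A = unit ball centred at (0.5, 0, 0) and H = hyperplane through
            the origin with unit normal x."""
            pts = rng.normal(size=(samples, 3))
            pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # uniform direction
            pts *= rng.random((samples, 1)) ** (1 / 3)         # uniform radius in ball
            pts += np.array([0.5, 0.0, 0.0])                   # translate ball to A
            ball_volume = 4 / 3 * np.pi
            return np.sign(pts @ x).mean() * ball_volume

        x = np.array([1.0, 0.0, 0.0])
        print(signed_volume_difference(x), signed_volume_difference(-x))
        # approximately equal magnitudes with opposite signs: f is odd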

    Deterministic Replacement Path Covering

    In this article, we provide a unified and simplified approach to derandomize central results in the area of fault-tolerant graph algorithms. Given a graph G, a vertex pair (s,t) in V(G) x V(G), and a set of edge faults F subseteq E(G), a replacement path P(s,t,F) is an s-t shortest path in G \ F. For integer parameters L, f, a replacement path covering (RPC) is a collection of subgraphs of G, denoted by G_{L,f} = {G_1, ..., G_r}, such that for every set F of at most f faults (i.e., |F| <= f) and every replacement path P(s,t,F) of at most L edges, there exists a subgraph G_i in G_{L,f} that contains all the edges of P and does not contain any of the edges of F. The covering value of the RPC G_{L,f} is then defined to be the number of subgraphs in G_{L,f}. We present efficient deterministic constructions of (L,f)-RPCs whose covering values almost match the randomized ones, for a wide range of parameters. Our time and value bounds improve considerably over the previous construction of Parter (DISC 2019). We also provide an almost matching lower bound for the value of these coverings. A key application of our deterministic constructions is the derandomization of the algebraic construction of the distance sensitivity oracle by Weimann and Yuster (FOCS 2010). The preprocessing and query times of our deterministic algorithm nearly match the randomized bounds. This resolves the open problem of Alon, Chechik and Cohen (ICALP 2019).
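
    For intuition, here is a hedged sketch (parameter choices ours) of the randomized construction that these deterministic results compete with: sample subgraphs by deleting each edge independently, so that any fixed fault set of at most f edges is avoided while any fixed path of at most L edges survives in some sample.

        import random

        def random_rpc(edges, L, f, r):
            """Sample r subgraphs, deleting each edge independently with
            probability q = f / (L + f). A single sample keeps a fixed path P
            of <= L edges and avoids a fixed fault set F of <= f edges with
            probability (1 - q)**L * q**f (maximized by this choice of q), so
            taking r large enough makes the collection an (L, f)-RPC with high
            probability via a union bound. Sketch only."""
            q = f / (L + f)
            return [[e for e in edges if random.random() > q] for _ in range(r)]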

    On Closest Pair in Euclidean Metric: Monochromatic is as Hard as Bichromatic

    Given a set of n points in R^d, the (monochromatic) Closest Pair problem asks to find a pair of distinct points in the set that are closest in the l_p-metric. Closest Pair is a fundamental problem in Computational Geometry and understanding its fine-grained complexity in the Euclidean metric when d=omega(log n) was raised as an open question in recent works (Abboud-Rubinstein-Williams [FOCS '17], Williams [SODA '18], David-Karthik-Laekhanukit [SoCG '18]). In this paper, we show that for every p in R_{>= 1} cup {0}, under the Strong Exponential Time Hypothesis (SETH), for every epsilon>0, the following holds:
    - No algorithm running in time O(n^{2-epsilon}) can solve the Closest Pair problem in d=(log n)^{Omega_epsilon(1)} dimensions in the l_p-metric.
    - There exists delta = delta(epsilon)>0 and c = c(epsilon)>=1 such that no algorithm running in time O(n^{1.5-epsilon}) can approximate the Closest Pair problem to a factor of (1+delta) in d >= c log n dimensions in the l_p-metric.
    In particular, our first result is shown by establishing the computational equivalence of the bichromatic Closest Pair problem and the (monochromatic) Closest Pair problem (up to an n^epsilon factor in the running time) for d=(log n)^{Omega_epsilon(1)} dimensions. Additionally, under SETH, we rule out nearly-polynomial factor approximation algorithms running in subquadratic time for the (monochromatic) Maximum Inner Product problem, where we are given a set of n points in n^{o(1)}-dimensional Euclidean space and are required to find a pair of distinct points in the set that maximize the inner product. At the heart of all our proofs is the construction of a dense bipartite graph with low contact dimension, i.e., we construct a balanced bipartite graph on n vertices with n^{2-epsilon} edges whose vertices can be realized as points in a (log n)^{Omega_epsilon(1)}-dimensional Euclidean space such that every pair of vertices which have an edge in the graph are at distance exactly 1 and every other pair of vertices are at distance greater than 1. This graph construction is inspired by the construction of locally dense codes introduced by Dumer-Micciancio-Sudan [IEEE Trans. Inf. Theory '03].
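
    As a concrete baseline (a sketch, not the paper's reductions), the quadratic-time algorithms that the above lower bounds address look as follows in the l_2 metric.

        import numpy as np

        def closest_pair(points):
            """Monochromatic Closest Pair by brute force: O(n^2 d) time.
            points: array of shape (n, d). Returns the index pair (i, j)."""
            dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            np.fill_diagonal(dist, np.inf)  # require the two points to be distinct
            return np.unravel_index(np.argmin(dist), dist.shape)

        def bichromatic_closest_pair(reds, blues):
            """Bichromatic variant: closest red-blue pair, also O(n^2 d); the
            paper shows the two problems are computationally equivalent up to
            n^epsilon factors in high dimensions."""
            dist = np.linalg.norm(reds[:, None, :] - blues[None, :, :], axis=-1)
            return np.unravel_index(np.argmin(dist), dist.shape)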

    Towards a General Direct Product Testing Theorem

    The Direct Product encoding of a string a in {0,1}^n on an underlying domain V subseteq ([n] choose k) is a function DP_V(a) which gets as input a set S in V and outputs a restricted to S. In the Direct Product Testing Problem, we are given a function F:V -> {0,1}^k, and our goal is to test whether F is close to a direct product encoding, i.e., whether there exists some a in {0,1}^n such that on most sets S, we have F(S)=DP_V(a)(S). A natural test is as follows: select a pair (S,S') in V according to some underlying distribution over V x V, query F on this pair, and check for consistency on their intersection. Note that the above distribution may be viewed as a weighted graph over the vertex set V and is referred to as a test graph. The testability of direct products was studied over various domains and test graphs: Dinur and Steurer (CCC '14) analyzed it when V equals the k-th slice of the Boolean hypercube and the test graph is a member of the Johnson graph family. Dinur and Kaufman (FOCS '17) analyzed it for the case where V is the set of faces of a Ramanujan complex, where in this case |V| = O_k(n). In this paper, we study the testability of direct products in a general setting, addressing the question: what properties of the domain and the test graph allow one to prove a direct product testing theorem? Towards this goal, we introduce the notion of coordinate expansion of a test graph. Roughly speaking, a test graph is a coordinate expander if it has global and local expansion, and has certain nice intersection properties on sampling. We show that whenever the test graph has coordinate expansion it admits a direct product testing theorem. Additionally, for every k and n we provide a direct product domain V subseteq ([n] choose k) of size n, called the Sliding Window domain, for which we prove direct product testability.
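
    A minimal sketch of the encoding and the consistency test (helper names and the cyclic variant of the Sliding Window domain are our illustration): a true direct product encoding passes the pairwise check with probability 1.

        import random

        def dp_encode(a, V):
            """Direct product encoding: map each set S in V (a sorted tuple of
            coordinates) to the restriction of the string a to S."""
            return {S: tuple(a[i] for i in S) for S in V}

        def consistency_test(F, sample_pair, trials=1000):
            """Sample pairs (S, S') from the test graph and check that F(S)
            and F(S') agree on every coordinate of the intersection."""
            passed = 0
            for _ in range(trials):
                S, T = sample_pair()
                fS, fT = dict(zip(S, F[S])), dict(zip(T, F[T]))
                passed += all(fS[i] == fT[i] for i in set(S) & set(T))
            return passed / trials

        n, k = 8, 3
        V = [tuple(sorted((i + j) % n for j in range(k))) for i in range(n)]
        a = [random.randint(0, 1) for _ in range(n)]
        F = dp_encode(a, V)
        assert consistency_test(F, lambda: (random.choice(V), random.choice(V))) == 1.0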